Basic Issues in Lagrangian Optimization
Author

Abstract
These lecture notes review the basic properties of Lagrange multipliers and constraints in problems of optimization from the perspective of how they influence the setting up of a mathematical model and the solution technique that may be chosen. Conventional problem formulations with equality and inequality constraints are discussed first, and Lagrangian optimality conditions are presented in a general form which accommodates range constraints on the variables without the need for introducing constraint functions for such constraints. Particular attention is paid to the distinction between convex and nonconvex problems and how convexity can be recognized and taken advantage of. Extended problem statements are then developed in which penalty expressions can be utilized as an alternative to black-and-white constraints. Lagrangian characterizations of optimality for such problems closely resemble the ones for conventional problems and in the presence of convexity take a saddle point form which offers additional computational potential. Extended linear-quadratic programming is explained as a special case.

1. FORMULATION OF OPTIMIZATION PROBLEMS

Everywhere in applied mathematics the question of how to choose an appropriate mathematical model has to be answered by art as much as by science. The model must be rich enough to provide useful qualitative insights as well as numerical answers that don't mislead. But it can't be too complicated or it will become intractable for analysis or demand data inputs that can't be supplied. In short, the model has to reflect the right balance between the practical issues to be addressed and the mathematical approaches that might be followed. This means, of course, that to do a good job of formulating a problem a modeler needs to be aware of the pros and cons of various problem statements that might serve as templates, such as standard linear programming, quadratic programming, and the like.
Knowledge of which features are advantageous, versus which are potentially troublesome, is essential. In optimization the difficulties can be all the greater because the key ideas are often different from the ones central to the rest of applied mathematics. For instance, in many subjects the crucial division is between linear and nonlinear models, but in optimization it is between convex and nonconvex. Yet convexity is not a topic much treated in a general mathematical education.

Problems of optimization always focus on the maximization or minimization of some function over some set, but the way the function and set are specified can have a great impact. One distinction is whether the decision variables involved are "discrete" or "continuous." Discrete variables with integer values, in particular logical variables which can only have the values 0 or 1, are appropriate in circumstances where a decision has to be made whether to build a new facility, or to start up a process with fixed initial costs. But the introduction of such variables in a model is a very serious step; the problem may become much harder to solve or even to analyze. Here we'll concentrate on continuous variables.

The conventional way to think about an optimization problem in finitely many continuous variables is that a function f0(x) is to be minimized over all the points x = (x1, . . . , xn) in some subset C of the finite-dimensional real vector space ℝⁿ. (Maximization is equivalent to minimization through multiplication by −1.) The set C is considered to be specified by a number of side conditions on x which are called constraints, the most common form being equality constraints fi(x) = 0 and inequality constraints fi(x) ≤ 0. As a catch-all for anything else, there may be an "abstract constraint" x ∈ X for some subset X ⊂ ℝⁿ. For instance, X can be thought of as indicating nonnegativity conditions, or upper and lower bounds, on some of the variables xj appearing as components of x. Such conditions
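As a minimal numerical illustration of this conventional formulation (not taken from the notes; the particular f0 and constraint below are hypothetical), consider minimizing a quadratic f0 subject to a single linear equality constraint. The Lagrangian stationarity condition ∇f0(x) + λ∇f1(x) = 0, together with the constraint f1(x) = 0, yields a linear system in (x, λ) that can be solved directly:

```python
# Hypothetical example: minimize
#   f0(x) = (x1 - 1)^2 + (x2 - 2)^2
# subject to the equality constraint
#   f1(x) = x1 + x2 - 2 = 0.
# Stationarity of the Lagrangian, grad f0(x) + lam * grad f1(x) = 0,
# combined with the constraint itself, gives the linear (KKT) system
#   [Q  A^T] [x  ]   [c]
#   [A   0 ] [lam] = [b].
import numpy as np

Q = 2.0 * np.eye(2)          # Hessian of f0
c = np.array([2.0, 4.0])     # chosen so that grad f0(x) = Q @ x - c
A = np.array([[1.0, 1.0]])   # constraint gradient as a row vector
b = np.array([2.0])          # constraint right-hand side

K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([c, b]))
x, lam = sol[:2], sol[2:]    # constrained minimizer and Lagrange multiplier
```

For this choice of data the system gives the minimizer x = (0.5, 1.5) with multiplier λ = 1; note that λ reports the sensitivity of the optimal value to perturbations of the constraint right-hand side.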
Publication date: 2007